
    Overlapping: a R package for Estimating Overlapping in Empirical Distributions

    overlapping is an R package for estimating the overlapping area of two or more kernel density estimates obtained from empirical data. The package aims to offer an easy way to quantify the similarity (or the difference) between two or more empirical distributions.
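The quantity the package estimates, the area shared by two kernel density estimates, can be sketched in Python (the function name and grid settings below are illustrative, not the package's R API):

```python
import numpy as np
from scipy.stats import gaussian_kde

def overlap_area(x, y, grid_size=2048):
    """Estimate the area shared by the kernel density estimates of two
    samples: 1 means identical distributions, 0 disjoint supports."""
    kde_x, kde_y = gaussian_kde(x), gaussian_kde(y)
    spread = 3 * max(x.std(), y.std())
    grid = np.linspace(min(x.min(), y.min()) - spread,
                       max(x.max(), y.max()) + spread, grid_size)
    # The overlap is the integral of the pointwise minimum of the two densities.
    step = grid[1] - grid[0]
    return np.minimum(kde_x(grid), kde_y(grid)).sum() * step

rng = np.random.default_rng(0)
same = overlap_area(rng.normal(0, 1, 500), rng.normal(0, 1, 500))    # near 1
apart = overlap_area(rng.normal(0, 1, 500), rng.normal(5, 1, 500))   # near 0
```

Integrating the pointwise minimum of the two densities is what makes the index symmetric in the two samples and bounded in [0, 1].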

    ssMousetrack: Analysing computerized tracking data via Bayesian state-space models in R

    Recent technological advances have provided new settings to enhance individual-based data collection, and computerized tracking data have become common in behavioral and social research. Collected with instantaneous tracking devices such as computer mice, Wii controllers, and joysticks, such data provide new insights into the dynamic unfolding of response processes. ssMousetrack is an R package for modeling and analysing computerized tracking data by means of a Bayesian state-space approach. The package provides a set of functions to prepare data, fit the model, and assess results via simple diagnostic checks. This paper describes the package and illustrates how it can be used to model and analyse computerized tracking data; a case study shows its use in empirical settings.
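The state-space idea, a latent process observed through noisy tracking samples, can be sketched with a minimal linear-Gaussian filter. The package itself fits richer Bayesian models, so everything below (random-walk dynamics, noise settings) is purely illustrative:

```python
import numpy as np

def kalman_filter(y, q=0.01, r=0.25):
    """Minimal 1-D random-walk Kalman filter.

    Latent state: x_t = x_{t-1} + process noise (variance q)
    Observation:  y_t = x_t     + measurement noise (variance r)
    Returns the filtered state estimates.
    """
    x_hat, p = 0.0, 1.0
    estimates = []
    for obs in y:
        # Predict: random-walk dynamics only inflate the state variance.
        p += q
        # Update: blend prediction and observation via the Kalman gain.
        k = p / (p + r)
        x_hat += k * (obs - x_hat)
        p *= 1 - k
        estimates.append(x_hat)
    return np.array(estimates)

rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0, 0.1, 200))   # latent trajectory
noisy = truth + rng.normal(0, 0.5, 200)      # observed cursor positions
smooth = kalman_filter(noisy)
```

The filtered trajectory tracks the latent process more closely than the raw observations, which is the basic payoff of the state-space decomposition.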

    Empirical Scenarios of Fake Data Analysis: The Sample Generation by Replacement (SGR) Approach

    Many self-report measures of attitudes, beliefs, personality, and pathology include items whose responses can easily be manipulated or distorted, for example to give a positive impression to others, obtain financial compensation, avoid being charged with a crime, or get a job. This confronts both researchers and practitioners with the crucial problem of biases introduced when standard statistical models are applied to such data. The current paper presents three empirical applications to the issue of faking of a recent probabilistic perturbation procedure called Sample Generation by Replacement (SGR; Lombardi and Pastore, 2012). To study the behavior of some statistics under fake perturbation and data reconstruction processes, ad-hoc faking scenarios were implemented and tested. Overall, results showed that SGR can be successfully applied both to research designs traditionally used to deal with faking (e.g., fake-detecting scales, experimentally induced faking, or contrasting applicants vs. incumbents) and to ecological research settings, where the researcher or practitioner can collect no information about faking. Implications and limitations are presented and discussed.
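A toy version of the perturbation idea can be sketched as follows. This is not the SGR algorithm itself, which uses formally specified replacement distributions; the function name and faking rate here are illustrative only:

```python
import numpy as np

def fake_positively(responses, rate, rng):
    """Perturb a fraction `rate` of 5-point Likert responses upward,
    mimicking respondents who fake a good impression."""
    faked = responses.copy()
    mask = rng.random(len(faked)) < rate
    # Replace each selected response with a value at least as high
    # as the original, drawn uniformly from {orig, ..., 5}.
    faked[mask] = rng.integers(faked[mask], 6)
    return faked

rng = np.random.default_rng(2)
honest = rng.integers(1, 6, size=1000)            # honest 1-5 responses
faked = fake_positively(honest, rate=0.3, rng=rng)
```

Comparing statistics (means, correlations, model fits) on `honest` versus `faked` data is the kind of sensitivity analysis the scenarios in the paper formalize.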

    A Maximum Entropy Procedure to Solve Likelihood Equations

    In this article, we provide initial findings on the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures, which set the score function of the maximum likelihood problem to zero, we propose an alternative strategy in which the score is instead used as an external informative constraint on the maximization of the (concave) Shannon entropy function. The problem involves reparameterizing the score parameters as expected values of discrete probability distributions whose probabilities need to be estimated. This leads to a simpler situation in which parameters are searched within a smaller (hyper-)simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggest that the maximum entropy reformulation of the score problem solves the likelihood equations. Moreover, when maximum likelihood estimation is difficult, as in logistic regression under separation, the maximum entropy proposal achieved results numerically comparable to those obtained by Firth's bias-corrected approach. Overall, these first findings indicate that a maximum entropy solution can be considered an alternative technique for solving the likelihood equations.
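The underlying principle, maximizing Shannon entropy subject to a moment constraint, can be sketched in an illustrative special case with a known Gibbs-form solution (the paper's score-based constraints are more involved than this):

```python
import numpy as np
from scipy.optimize import brentq

def max_entropy_dist(support, target_mean):
    """Maximum-entropy distribution on a discrete support with a fixed mean.

    The maximizer has Gibbs form p_i proportional to exp(lam * x_i); we solve
    for the Lagrange multiplier lam that satisfies the mean constraint.
    """
    x = np.asarray(support, dtype=float)

    def mean_given(lam):
        w = np.exp(lam * (x - x.mean()))   # centred for numerical stability
        p = w / w.sum()
        return p @ x

    lam = brentq(lambda l: mean_given(l) - target_mean, -50, 50)
    w = np.exp(lam * (x - x.mean()))
    return w / w.sum()

# Most-uniform distribution on {0,...,4} whose mean is constrained to 1.0.
p = max_entropy_dist([0, 1, 2, 3, 4], target_mean=1.0)
```

Reparameterizing unknowns as expectations of such distributions is what moves the search into a simplex, as described in the abstract.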

    Working Memory Training for Healthy Older Adults: The Role of Individual Characteristics in Explaining Short- and Long-Term Gains

    Objective: The aim of the present study was to explore whether individual characteristics such as age, education, vocabulary, and baseline performance in a working memory (WM) task similar to the one used in the training (criterion task) predict the short- and long-term specific gains and transfer effects of a verbal WM training for older adults. Method: Four studies that adopted the Borella et al. (2010) verbal WM training procedure were eligible for our analysis, as they included: healthy older adults who attended either the training sessions (WM training group) or alternative activities (active control group); the same measures for assessing specific gains (on the criterion WM task) and transfer effects (nearest, on a visuo-spatial WM task; near, on short-term memory tasks; and far, on a measure of fluid intelligence, a measure of processing speed, and two inhibitory measures); and a follow-up session. Results: Linear mixed models confirmed the overall efficacy of the training, at least in the short term, and some maintenance effects. In the trained group, the individual characteristics considered contributed (albeit only modestly in some cases) to explaining the effects of the training. Conclusions: Overall, our findings suggest the importance of taking individual characteristics and individual differences into account when examining WM training gains in older adults.

    Comparing Different Methods for Multiple Testing in Reaction Time Data

    Reaction times were simulated to examine the power of six methods for multiple testing as a function of sample size and departures from normality. Power estimates were low for all methods under non-normal distributions. Under normal distributions, satisfactory power was observed even for small sample sizes, especially for FDR-based procedures.
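FDR-based procedures of the kind referenced here typically follow the Benjamini-Hochberg step-up rule (whether this exact variant is among the six methods compared is an assumption). It can be sketched as:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which hypotheses are rejected
    while controlling the false discovery rate at level alpha.
    """
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        # Largest rank i with p_(i) <= i * alpha / m; reject all up to it.
        k = np.max(np.nonzero(below)[0])
        rejected[order[: k + 1]] = True
    return rejected

rejected = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.74])
```

The step-up rule rejects everything up to the largest ranked p-value under its threshold, which is why it is less conservative than Bonferroni-style familywise corrections.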

    Adolescent Gambling-Oriented Attitudes Mediate the Relationship Between Perceived Parental Knowledge and Adolescent Gambling: Implications for Prevention

    Although substantial research has provided support for the association between parental practices and adolescent gambling, less is known about the role of adolescent attitudes in this relationship. The primary purpose of this study was to test an integrative model linking perceived parental knowledge (children’s perceptions of their parents’ knowledge of their whereabouts and companions) with adolescent gambling while evaluating the mediating effects of adolescents’ own gambling approval, risk perception of gambling, and descriptive norms on gambling shared with friends. The data were drawn from the ESPAD® Italia 2012 (European School Survey Project on Alcohol and Other Drugs) study, which is based on a nationally representative sample of Italian adolescent students aged 15–19. The analysis was carried out on a subsample of 19,573 subjects (average age 17.11, 54% girls). Self-completed questionnaires were administered in the classroom setting. The results revealed that adolescents who perceived higher levels of parental knowledge were more likely to disapprove of gambling and show higher awareness of its harmfulness, which were in turn negatively related to gambling frequency. They were also less likely to perceive their friends as gamblers, which was also negatively related to gambling frequency. These findings suggest that gambling prevention efforts should consider perceived parental knowledge and gambling-oriented attitudes (self-approval, risk perception, and descriptive norms) as factors that may buffer adolescent gambling behavior in various situations.
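The product-of-coefficients logic behind such a mediation model can be sketched on simulated data. Variable names and effect sizes below are invented for illustration, not the study's estimates, and the study's actual model is a multi-mediator structural model rather than this two-regression toy:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
parental_knowledge = rng.normal(size=n)                    # predictor X
# Mediator M: gambling approval decreases with parental knowledge (a = -0.5).
approval = -0.5 * parental_knowledge + rng.normal(size=n)
# Outcome Y: gambling frequency rises with approval (b = 0.6).
gambling = 0.6 * approval + rng.normal(size=n)

# Path a: regress the mediator on the predictor.
a = np.polyfit(parental_knowledge, approval, 1)[0]
# Path b: regress the outcome on mediator and predictor jointly.
X = np.column_stack([approval, parental_knowledge, np.ones(n)])
b = np.linalg.lstsq(X, gambling, rcond=None)[0][0]

indirect_effect = a * b   # expected around -0.5 * 0.6 = -0.3
```

A negative indirect effect of this form is the statistical shape of the abstract's claim: parental knowledge lowers gambling frequency through its effect on gambling-approving attitudes.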

    The effects of perceived competence and sociability on electoral outcomes

    Previous research demonstrated that inferences of competence from the face are good predictors of electoral outcomes (Todorov et al., 2005). In the current work we examined the role of another key dimension of social perception, namely perceived sociability. Results showed that people considered both competence and sociability, as inferred from the face, to be related to higher chances of winning an election. A different pattern emerged for actual electoral outcomes: perceived competence was related to higher chances of winning, whereas perceived sociability was negatively related to electoral success. These two fundamental dimensions of social perception thus exert opposite effects on voting behavior.

    Measuring Distribution Similarities Between Samples: A Distribution-Free Overlapping Index

    Every day, cognitive and experimental researchers attempt to find evidence in support of their hypotheses in terms of statistical differences or similarities among groups. The most typical cases involve quantifying the difference between two samples in terms of their mean values using the t statistic or other measures, such as Cohen's d or the U metric. In both cases the aim is to quantify how large such differences must be in order to count as notable effects. These issues are particularly relevant in experimental and applied psychological research. However, most of these standard measures require distributional assumptions for their correct use, such as symmetry, unimodality, and well-established parametric forms. Although these assumptions guarantee that asymptotic properties for inference are satisfied, they can often limit the validity and interpretability of results. In this article we illustrate the use of a distribution-free overlapping measure as an alternative way to quantify sample differences and to assess research hypotheses expressed in terms of Bayesian evidence. The main features and potential of the overlapping index are illustrated by means of three empirical applications. Results suggest that using this index can considerably improve the interpretability of data-analysis results in psychological research, as well as the reliability of the conclusions researchers can draw from their studies.

    A Closer Look at Self-training for Zero-Label Semantic Segmentation

    Being able to segment unseen classes not observed during training is an important technical challenge in deep learning, because of its potential to reduce the expensive annotation required for semantic segmentation. Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models. However, they are prone to overfitting on the seen classes because there is no training signal for the unseen ones. In this paper, we study the challenging generalized zero-label semantic segmentation task, where the model has to segment both seen and unseen classes at test time. We assume that pixels of unseen classes may be present in the training images but without being annotated. Our idea is to capture the latent information on unseen classes by supervising the model with self-produced pseudo-labels for unlabeled pixels. We propose a consistency regularizer that filters out noisy pseudo-labels by taking the intersection of the pseudo-labels generated from different augmentations of the same image. Our framework generates pseudo-labels and then retrains the model on human-annotated and pseudo-labelled data, repeating this procedure for several iterations. As a result, our approach achieves a new state of the art on the PascalVOC12 and COCO-stuff datasets in the challenging generalized zero-label semantic segmentation setting, surpassing other existing methods that address this task with more complex strategies. Code can be found at https://github.com/giuseppepastore10/STRICT.
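The intersection-based consistency regularizer can be sketched in NumPy. This is a toy: the paper intersects pseudo-labels from genuinely augmented views mapped back into spatial alignment, which is simulated here by perturbing the logits of one view:

```python
import numpy as np

def consistent_pseudo_labels(logits_a, logits_b, ignore_index=255):
    """Keep a pixel's pseudo-label only if the predictions from two
    augmented views agree; otherwise mark the pixel as ignored."""
    pred_a = logits_a.argmax(axis=0)   # (H, W) class map from view A
    pred_b = logits_b.argmax(axis=0)   # (H, W) class map from view B
    return np.where(pred_a == pred_b, pred_a, ignore_index)

rng = np.random.default_rng(4)
logits_a = rng.normal(size=(21, 4, 4))                      # 21 classes, 4x4 "image"
logits_b = logits_a + rng.normal(scale=0.1, size=(21, 4, 4))  # perturbed second view
labels = consistent_pseudo_labels(logits_a, logits_b)
```

Pixels where the two views disagree receive the ignore index and contribute no gradient during retraining, which is how the intersection suppresses noisy pseudo-labels.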